Hierarchical mixture models for assessing fingerprint individuality
The study of fingerprint individuality aims to determine to what extent a
fingerprint uniquely identifies an individual. Recent court cases have
highlighted the need for measures of fingerprint individuality when a person is
identified based on fingerprint evidence. The main challenge in studies of
fingerprint individuality is to adequately capture the variability of
fingerprint features in a population. In this paper, hierarchical mixture models
are introduced to infer the extent of individualization. Hierarchical mixtures
utilize complementary aspects of mixtures at different levels of the hierarchy.
At the first (top) level, a mixture is used to represent homogeneous groups of
fingerprints in the population, whereas at the second level, nested mixtures
are used as flexible representations of distributions of features from each
fingerprint. Inference for hierarchical mixtures is more challenging because
unknown numbers of mixture components arise at both the first and second levels
of the hierarchy. A Bayesian approach based on reversible jump Markov chain
Monte Carlo methodology is developed for the inference of all unknown
parameters of hierarchical mixtures. The methodology is illustrated on
fingerprint images from the NIST database and is used to make inference on
fingerprint individuality estimates from this population.

Comment: Published at http://dx.doi.org/10.1214/09-AOAS266 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
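The two-level structure described in the abstract can be sketched generatively: a fingerprint is first assigned to one of a few homogeneous population groups, and its features are then drawn from that group's own nested mixture. A minimal sketch follows; the weights, means, and the Gaussian feature model are made-up illustrative values, not the paper's feature model or its fitted NIST quantities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for illustration only.
# Top level: G = 3 homogeneous groups of fingerprints with weights tau.
tau = np.array([0.5, 0.3, 0.2])
# Second level: each group g has its own nested mixture over K = 2
# feature components (standing in for clusters of minutia features).
pi = np.array([[0.6, 0.4],            # nested weights, shape (G, K)
               [0.2, 0.8],
               [0.5, 0.5]])
mu = np.array([[0.0, 3.0],            # component means, shape (G, K)
               [1.0, 4.0],
               [2.0, 5.0]])
sigma = 0.5                           # common component spread

def simulate_fingerprint(n_features):
    """Draw one fingerprint: pick a group at the top level, then draw
    its features from that group's nested mixture."""
    g = rng.choice(len(tau), p=tau)                        # group label
    k = rng.choice(pi.shape[1], p=pi[g], size=n_features)  # nested labels
    return g, rng.normal(mu[g, k], sigma)                  # feature values

group, features = simulate_fingerprint(50)
print(group, features[:3])
```

In the paper the numbers of components at both levels are themselves unknown, which is what the reversible jump sampler handles; this sketch only shows the generative hierarchy with those numbers fixed.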
Inference for Differential Equation Models using Relaxation via Dynamical Systems
Statistical regression models whose mean functions are represented by
ordinary differential equations (ODEs) can be used to describe dynamical
phenomena, which are abundant in areas such as biology, climatology, and
genetics. Estimating the parameters of ODE-based models is essential for
understanding their dynamics, but the lack of an analytical solution to the
ODE makes parameter estimation challenging. The aim of this paper is to
propose a general and fast framework of statistical inference for ODE-based
models by relaxation of the underlying ODE system. Relaxation is achieved by
a properly chosen numerical procedure, such as the Runge-Kutta method, and by
introducing additive Gaussian noise with small variance. Consequently,
filtering methods can be applied to obtain the posterior distribution of the
parameters in the Bayesian framework. The main advantage of the proposed
method is computational speed: in a simulation study, it was at least 14
times faster than the competing methods. Theoretical results that guarantee
the convergence of the posterior of the approximated dynamical system to the
posterior of the true model are presented. Explicit expressions are given
that relate the order and the mesh size of the Runge-Kutta procedure to the
rate of convergence of the approximated posterior as a function of sample size.
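The relaxation idea can be illustrated on a toy problem: replacing the deterministic Runge-Kutta transition by the same transition plus small additive Gaussian noise turns the ODE model into a state-space model, to which standard filtering applies. In the sketch below the logistic ODE, the noise levels, the grid prior, and the bootstrap particle filter are all illustrative choices of mine, not the paper's examples or its recommended filter:

```python
import numpy as np

rng = np.random.default_rng(1)

def rk4_step(x, theta, h):
    """One fourth-order Runge-Kutta step for the (illustrative) logistic
    ODE dx/dt = theta * x * (1 - x)."""
    f = lambda u: theta * u * (1.0 - u)
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Synthetic observations from the true model (theta = 1.5, obs. sd = 0.05).
h, T, theta_true, obs_sd = 0.1, 60, 1.5, 0.05
x, xs = 0.1, []
for _ in range(T):
    x = rk4_step(x, theta_true, h)
    xs.append(x)
y = np.array(xs) + rng.normal(0.0, obs_sd, T)

def log_lik(theta, lam=1e-3, n_part=300):
    """Relaxed-model log-likelihood: the RK4 transition is perturbed by
    additive N(0, lam^2) noise, and a bootstrap particle filter
    integrates the latent states out."""
    px = np.full(n_part, 0.1)
    ll = 0.0
    for t in range(T):
        px = rk4_step(px, theta, h) + rng.normal(0.0, lam, n_part)
        w = np.exp(-0.5 * ((y[t] - px) / obs_sd) ** 2)
        ll += np.log(w.mean() + 1e-300)
        px = px[rng.choice(n_part, n_part, p=w / w.sum())]  # resample
    return ll

# Flat prior on a grid, so the posterior mode is the likelihood maximiser.
grid = np.linspace(0.5, 2.5, 21)
post = np.array([log_lik(th) for th in grid])
theta_hat = grid[np.argmax(post)]
print(theta_hat)  # close to the true value 1.5
```

Shrinking the relaxation variance lam and the mesh size h tightens the approximation to the exact ODE posterior, which is the trade-off the paper's convergence-rate expressions quantify.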
Unified Bayesian and conditional frequentist testing procedures
In hypothesis testing, the conclusions from Bayesian and frequentist approaches can differ markedly, especially in the reporting of error probabilities. Recently, Berger, Brown and Wolpert (1994) have shown that the conditional frequentist method can be made exactly equivalent to the Bayesian method for simple vs. simple hypothesis testing. This was extended to testing of a simple null versus a composite alternative in Berger, Boukai, and Wang (1997a, 1997b). This thesis extends the unification in two further directions: to composite null hypotheses and to testing in discrete settings. Many composite null hypotheses can be reduced to simple null hypotheses through invariance arguments. The first part of the thesis demonstrates how to construct default Bayesian tests that can similarly be reduced to testing of this simple null hypothesis, allowing the methods of Berger, Boukai, and Wang to be applied. To achieve the unification, one must carefully choose the prior distributions used in the Bayesian analysis. Under the null hypothesis, it is shown that the parameters must be assigned the right Haar measure as a noninformative prior. Priors on the alternative must be compatible with this choice and must also deal with the presence of additional parameters (when the alternative is more complex than the null). When additional parameters are present, ideas from modern Bayesian testing theory, such as 'intrinsic priors from fractional or intrinsic Bayes factors', are utilized in the development. Intrinsic priors arising from the use of intrinsic Bayes factors in hypothesis testing are shown to be proper under general group invariance conditions. For the unified test, the conditional Type I error is constant over the original null hypothesis and is equal to the posterior probability of the null. This equivalence is of primary importance in the unification. A number of testing scenarios are studied as illustrations of the methodology.
A related issue is also studied, namely the choice of the sample size needed to achieve desired conditional goals. Unification for testing problems involving discrete distributions is achieved via randomization. The effect of randomization on decision making is minimal, unlike in unconditional frequentist testing, where the effect can be considerable.
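The central equivalence, that the conditional Type I error equals the posterior probability of the null, can be checked by simulation in the simplest setting: simple vs. simple normal testing with equal prior weights. This is a toy version of the Berger-Brown-Wolpert setup, not the thesis's composite-null development. Among datasets whose likelihood ratio lands near a value b, the long-run fraction generated under the null is 1/(1+b):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setting: H0: X ~ N(0, 1) versus H1: X ~ N(1, 1), one observation,
# prior P(H0) = P(H1) = 1/2. The likelihood ratio is
#   B10(x) = f1(x) / f0(x) = exp(x - 1/2),
# and the posterior probability of H0 is P(H0 | x) = 1 / (1 + B10(x)).
n = 400_000
h1 = rng.random(n) < 0.5                  # true hypothesis per replicate
x = rng.normal(h1.astype(float), 1.0)     # N(0,1) under H0, N(1,1) under H1
b10 = np.exp(x - 0.5)                     # likelihood ratio of H1 to H0

# Conditional frequentist reading: among replicates whose likelihood
# ratio falls near b, the long-run fraction that came from H0 matches
# 1/(1 + b), so reporting the posterior probability of the null is also
# a valid conditional frequentist error report.
b = 3.0
near = np.abs(b10 - b) < 0.1
frac_h0 = np.mean(~h1[near])
print(frac_h0, 1.0 / (1.0 + b))           # both close to 0.25
```

The thesis's contribution is to extend this kind of agreement beyond the simple-vs-simple case, to composite nulls (via right Haar priors and invariance) and to discrete settings (via randomization).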